486 research outputs found

    Optimized Hierarchical Power Oscillations Control for Distributed Generation Under Unbalanced Conditions

    Control structures have a critical influence on converter-interfaced distributed generation (DG) under unbalanced conditions. Most previous works focus on suppressing active power oscillations and ripples of the DC bus voltage. In this paper, the relationship between the amplitudes of the active power oscillations and the reactive power oscillations is first derived, and a hierarchical control of DG is proposed to reduce power oscillations. The hierarchical control consists of primary and secondary levels. Current references are generated at the primary control level, where the active power oscillations can be suppressed by a dual current controller. The secondary control reduces the active and reactive power oscillations simultaneously via an optimization model that minimizes the oscillation amplitudes. Simulation results show that the proposed secondary control effectively limits both active and reactive power oscillations while injecting less negative-sequence current than traditional control methods. Comment: Accepted by Applied Energy
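The trade-off the secondary control exploits can be illustrated with a toy numerical sketch. The ripple model below, the trade-off factor `k`, and the grid search are illustrative assumptions standing in for the paper's optimal model, not its actual equations:

```python
import numpy as np

# Hypothetical simplified model (not the paper's equations): under
# unbalance, the double-frequency active/reactive power ripples depend
# on the injected negative-sequence current.  A factor k in [0, 1]
# trades off cancelling the active ripple (k = 1) against keeping the
# reactive ripple small (k = 0).

def oscillation_amplitudes(vp, vn, ip, k):
    """Rough ripple amplitudes for positive/negative-sequence voltages
    vp, vn, positive-sequence current ip, and trade-off factor k."""
    i_n = k * (vn / vp) * ip             # injected negative-sequence current
    p_ripple = abs(vn * ip - vp * i_n)   # active ripple, cancelled at k = 1
    q_ripple = abs(vn * ip + vp * i_n)   # reactive ripple, grows with i_n
    return p_ripple, q_ripple, i_n

def secondary_control(vp, vn, ip, w_p=1.0, w_q=1.0):
    """Pick k minimizing a weighted sum of squared ripple amplitudes,
    mimicking the secondary level's amplitude-minimizing optimization."""
    ks = np.linspace(0.0, 1.0, 1001)

    def cost(k):
        p, q, _ = oscillation_amplitudes(vp, vn, ip, k)
        return w_p * p**2 + w_q * q**2

    return ks[int(np.argmin([cost(k) for k in ks]))]
```

With equal weights this toy model favors injecting no negative-sequence current at all; weighting the active-power ripple more heavily shifts the optimum toward partial cancellation, which mirrors the abstract's point that less negative-sequence injection can still limit both oscillations.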

    Convex Latent-Optimized Adversarial Regularizers for Imaging Inverse Problems

    Recently, data-driven techniques have demonstrated remarkable effectiveness in addressing challenges related to MR imaging inverse problems. However, these methods still exhibit certain limitations in terms of interpretability and robustness. In response, we introduce Convex Latent-Optimized Adversarial Regularizers (CLEAR), a novel and interpretable data-driven paradigm. CLEAR represents a fusion of deep learning (DL) and variational regularization. Specifically, we employ a latent optimization technique to adversarially train an input convex neural network, whose set of minima can fully represent the real data manifold. We utilize it as a convex regularizer to formulate a CLEAR-informed variational regularization model that guides the solution of the imaging inverse problem on the real data manifold. Leveraging its inherent convexity, we have established the convergence of the projected subgradient descent algorithm for the CLEAR-informed regularization model. This convergence guarantees the attainment of a unique solution to the imaging inverse problem, subject to certain assumptions. Furthermore, we have demonstrated the robustness of our CLEAR-informed model, explicitly showcasing its capacity to achieve stable reconstruction even in the presence of measurement interference. Finally, we illustrate the superiority of our approach using MRI reconstruction as an example. Our method consistently outperforms conventional data-driven techniques and traditional regularization approaches, excelling in both reconstruction quality and robustness.
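The projected subgradient descent the abstract refers to can be sketched in a few lines. This is a minimal stand-in, not the paper's implementation: the learned input-convex regularizer is replaced by the l1 norm (any convex function with a computable subgradient would do), and the projection set is assumed to be a simple box:

```python
import numpy as np

# Projected subgradient descent for a CLEAR-style variational model
#   min_x  0.5 * ||A x - y||^2 + lam * R(x),   R convex.
# Here R(x) = ||x||_1 stands in for the trained input-convex network,
# and the "manifold" projection is assumed to be clipping onto a box.

def subgrad_R(x):
    return np.sign(x)  # a valid subgradient of the l1 norm

def projected_subgradient(A, y, lam=0.1, steps=500, lb=0.0, ub=1.0):
    x = np.zeros(A.shape[1])
    for t in range(1, steps + 1):
        g = A.T @ (A @ x - y) + lam * subgrad_R(x)  # subgradient of objective
        x = x - (1.0 / t) * g                       # diminishing step size
        x = np.clip(x, lb, ub)                      # projection onto the box
    return x
```

Convexity of the regularizer is what makes this simple scheme provably convergent; swapping in a non-convex network would void that guarantee, which is the abstract's central argument for input-convex architectures.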

    Matrix Completion-Informed Deep Unfolded Equilibrium Models for Self-Supervised k-Space Interpolation in MRI

    Recently, regularization model-driven deep learning (DL) has gained significant attention due to its ability to leverage the potent representational capabilities of DL while retaining the theoretical guarantees of regularization models. However, most of these methods are tailored for supervised learning scenarios that necessitate fully sampled labels, which can pose challenges in practical MRI applications. To tackle this challenge, we propose a self-supervised DL approach for accelerated MRI that is theoretically guaranteed and does not rely on fully sampled labels. Specifically, we achieve neural network structure regularization by exploiting the inherent structural low-rankness of the k-space data. Simultaneously, we constrain the network structure to resemble a nonexpansive mapping, ensuring the network's convergence to a fixed point. Thanks to this well-defined network structure, this fixed point can completely reconstruct the missing k-space data based on matrix completion theory, even in situations where fully sampled labels are unavailable. Experiments validate the effectiveness of our proposed method and demonstrate its superiority over existing self-supervised approaches and traditional regularization methods, achieving performance comparable to that of supervised learning methods in certain scenarios.
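The shape of the fixed-point iteration described above can be sketched with a matrix-completion toy example. Hard SVD rank truncation stands in here for the learned network (it is not nonexpansive in general, so this only illustrates the iteration's structure, not the paper's convergence guarantee), and the data-consistency step re-imposes the sampled k-space entries each pass:

```python
import numpy as np

# Toy fixed-point iteration for self-supervised k-space interpolation:
# alternate a low-rank step (stand-in for the learned operator, exploiting
# structural low-rankness) with data consistency on the sampled entries.

def low_rank_step(X, rank):
    """Truncate X to its leading `rank` singular components."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[rank:] = 0.0
    return U @ np.diag(s) @ Vt

def interpolate_kspace(Y, mask, rank=2, iters=200):
    """Fill the unsampled entries (mask == 0) of k-space data Y."""
    X = Y * mask                          # zero-filled initialization
    for _ in range(iters):
        Z = low_rank_step(X, rank)        # "network" step
        X = mask * Y + (1 - mask) * Z     # keep sampled entries fixed
    return X
```

At a fixed point, the sampled entries agree with the measurements and the completed matrix is (approximately) low-rank, which is the matrix-completion condition the abstract invokes to argue that the missing k-space data is fully determined without labels.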